Wide crawls of the Internet conducted by Internet Archive. Please visit the Wayback Machine to explore archived web sites. Since September 10th, 2010, the Internet Archive has been running Worldwide Web Crawls of the global web, capturing web elements, pages, sites and parts of sites. Each Worldwide Web Crawl was initiated from one or more lists of URLs that are known as "Seed Lists". Descriptions of the Seed Lists associated with each crawl may be provided as part of the metadata for...
Content crawled via the Wayback Machine Live Proxy, mostly by the Save Page Now feature on web.archive.org. The liveweb proxy is a component of the Internet Archive's Wayback Machine project. It captures the content of a web page in real time, archives it into an ARC or WARC file, and returns the ARC/WARC record to the Wayback Machine for processing. The recorded ARC/WARC file becomes part of the Wayback Machine in due course.
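A minimal sketch of this kind of real-time capture, using the open-source warcio library rather than the Archive's internal liveweb proxy code; the output filename and target URL below are placeholders:

    # Sketch: fetch a live page and record the HTTP exchange as WARC records,
    # roughly what a liveweb capture produces. Not the Archive's implementation.
    from warcio.capture_http import capture_http
    import requests  # warcio asks that requests be imported after capture_http

    with capture_http('live-capture.warc.gz'):
        requests.get('https://example.com/')

The resulting WARC file can then be indexed and replayed by Wayback-style tooling.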
Survey crawls are run about twice a year, on average, and attempt to capture the content of the front page of every web host ever seen by the Internet Archive since 1996.
Topic: survey crawls
The seed for Wide00014 was:
- Slash pages from every domain on the web:
-- a list of domains using Survey crawl seeds
-- a list of domains using Wide00012 web graph
-- a list of domains using Wide00013 web graph
- Top ranked pages (up to a max of 100) from every linked-to domain using the Wide00012 inter-domain navigational link graph:
-- a ranking of all URLs that have more than one incoming inter-domain link (rank was determined by number of incoming links using Wide00012 inter-domain links)...
Wide17 was seeded with the "Total Domains" list of 256,796,456 URLs provided by Domains Index on June 26th, and crawled with max-hops set to "3" and de-duplication set to "on".
Web wide crawl number 16. The seed list for Wide00016 was made by joining the top 1 million domains from Cisco with the top 1 million domains from Alexa.
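A sketch of how such a seed list could be assembled; the file names are placeholders, and since the description does not say whether "join" means the union or the intersection of the two lists, the union is assumed here:

    # Sketch: combine the Cisco and Alexa top-1M domain lists into one seed list.
    # File names are placeholders; "join" is assumed to mean set union.
    def load_domains(path):
        with open(path) as f:
            return {line.strip().lower() for line in f if line.strip()}

    cisco = load_domains('cisco-top-1m.txt')
    alexa = load_domains('alexa-top-1m.txt')
    seeds = sorted(cisco | alexa)  # use "cisco & alexa" for the intersection

    with open('wide00016-seeds.txt', 'w') as out:
        for domain in seeds:
            out.write('http://' + domain + '/\n')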
The seed for this crawl was a list of every host in the Wayback Machine. This crawl was run at level 1 (URLs including their embeds, plus the URLs of all outbound links including their embeds). The WARC files associated with this crawl are not currently available to the general public.
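A rough sketch of what this "level 1" scope means in practice (not Heritrix's actual scoping rules); fetch, extract_embeds and extract_links are hypothetical helpers standing in for a real fetcher and parser:

    # Conceptual sketch of a level-1 crawl: capture each seed and its embeds,
    # then each outbound link and its embeds, and go no further.
    # fetch(), extract_embeds() and extract_links() are hypothetical helpers.
    def crawl_level_1(seeds):
        archived = set()

        def capture(url):
            if url in archived:
                return None
            archived.add(url)
            page = fetch(url)                      # download and archive the URL
            for embed in extract_embeds(page):     # images, CSS, JS, etc.
                if embed not in archived:
                    archived.add(embed)
                    fetch(embed)
            return page

        for seed in seeds:
            page = capture(seed)
            if page is None:
                continue
            for link in extract_links(page):       # outbound links: one hop only
                capture(link)

        return archived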
A daily crawl of more than 200,000 home pages of news sites, including the pages linked from those home pages. Site list provided by The GDELT Project.
Topics: GDELT, News
Web wide crawl with initial seedlist and crawler configuration from January 2015.
The seeds for this crawl came from:
- 251 million domains that had at least one link from a different domain in the Wayback Machine, across all time
- ~300 million domains that we had in the Wayback Machine, across all time
- 55,945,067 domains from https://archive.org/details/wide00016
This crawl was run with a Heritrix setting of "maxHops=0" (URLs including their embeds). The WARC files associated with this crawl are not currently available to the general public.
Web wide crawl with initial seedlist and crawler configuration from April 2013.
Web wide crawl with initial seedlist and crawler configuration from June 2014.
The seed for this crawl was a list of every host in the Wayback Machine. This crawl was run at level 1 (URLs including their embeds, plus the URLs of all outbound links including their embeds). The WARC files associated with this crawl are not currently available to the general public.
Crawl of outlinks from wikipedia.org started March, 2016. These files are currently not publicly accessible. Properties of this collection: It has been several years since the last time we did this. For this collection, several things were done: 1. Turned off duplicate detection; this collection will be complete, as there is a good chance we will share the data, and sharing data with pointers to random other collections is a complex problem. 2. For the first time, did all the different wikis....
This "Survey" crawl was started on Feb. 24, 2018. This crawl was run with a Heritrix setting of "maxHops=0" (URLs including their embeds) Survey 7 is based on a seed list of 339,249,218 URLs which is all the URLs in the Wayback Machine that we saw a 200 response code from in 2017 based on a query we ran on Feb. 1st, 2018. The WARC files associated with this crawl are not currently available to the general public.
Wayback indexes. This data is currently not publicly accessible.
The seed for this crawl was a list of every host in the Wayback Machine. This crawl was run at level 1 (URLs including their embeds, plus the URLs of all outbound links including their embeds). The WARC files associated with this crawl are not currently available to the general public.
Web wide crawl with initial seedlist and crawler configuration from August 2013.
The seed for this crawl was a list of every host in the Wayback Machine. This crawl was run at level 1 (URLs including their embeds, plus the URLs of all outbound links including their embeds). The WARC files associated with this crawl are not currently available to the general public.
Web wide crawl with initial seedlist and crawler configuration from January 2012 using HQ software.
Web wide crawl with initial seedlist and crawler configuration from April 2012.
Web wide crawl with initial seedlist and crawler configuration from February 2014.
The seed for this crawl was a list of every host in the Wayback Machine. This crawl was run at level 1 (URLs including their embeds, plus the URLs of all outbound links including their embeds). The WARC files associated with this crawl are not currently available to the general public.
Web wide crawl with initial seedlist and crawler configuration from October 2010.
Crawls of International News Sites
Screen captures of hosts discovered during wide crawls. This data is currently not publicly accessible.
Wide crawls of the Internet conducted by Internet Archive. Access to content is restricted. Please visit the Wayback Machine to explore archived web sites.
Web wide crawl with initial seedlist and crawler configuration from March 2011 using HQ software.
This collection includes web crawls of the Federal Executive, Legislative, and Judicial branches of government performed at the end of US presidential terms of office.
Topics: web, end of term, US, federal government
Web wide crawl with initial seedlist and crawler configuration from September 2012.
Survey crawl of .com domains started January 2011.
Topic: webcrawl
The seed for this crawl was a list of every host in the Wayback Machine. This crawl was run at level 1 (URLs including their embeds, plus the URLs of all outbound links including their embeds). The WARC files associated with this crawl are not currently available to the general public.
Web wide crawl with initial seedlist and crawler configuration from March 2011. This uses the new HQ software for distributed crawling by Kenji Nagahashi.
What’s in the data set:
- Crawl start date: 09 March, 2011
- Crawl end date: 23 December, 2011
- Number of captures: 2,713,676,341
- Number of unique URLs: 2,273,840,159
- Number of hosts: 29,032,069
The seed list for this crawl was a list of Alexa’s top 1 million web sites, retrieved close to the crawl start date. We used Heritrix (3.1.1-SNAPSHOT)...
Miscellaneous high-value news sites
Topics: World news, US news, news
Crawl of outlinks from wikipedia.org started February, 2012. These files are currently not publicly accessible.
Data crawled by Sloan Foundation on behalf of Internet Archive
Captures of pages from YouTube. Currently these are discovered by searching for YouTube links on Twitter.
Topics: YouTube, Twitter, Video
This collection contains web crawls performed on the US Federal Executive, Legislative & Judicial branches of government in 2020-2021. Information about this project can be found here: https://end-of-term.github.io/eotarchive/. You can submit URLs to be archived here: https://digital2.library.unt.edu/nomination/eth2020/add/
Crawl of links posted to Hacker News.
Crawl of outlinks from wikipedia.org started May, 2011. These files are currently not publicly accessible.
Shallow crawls that collect content 1 level deep including embeds. This data is currently not publicly accessible.
This collection contains web crawls performed as part of the End of Term Web Archive, a collaborative project that aims to preserve the U.S. federal government web presence at each change of administration. Content includes publicly-accessible government websites hosted on .gov, .mil, and relevant non-.gov domains, as well as government social media materials. The web archiving was performed in the Fall and Winter of 2016 and Spring of 2017. For more information, see...
Topics: end of term, federal government, 2016, president, congress, government data
Geocities crawl performed by Internet Archive. This data is currently not publicly accessible. From Wikipedia: Yahoo! GeoCities is a Web hosting service. GeoCities was originally founded by David Bohnett and John Rezner in late 1994 as Beverly Hills Internet (BHI), and by 1999 GeoCities was the third-most visited Web site on the World Wide Web. In its original form, site users selected a "city" in which to place their Web pages. The "cities" were metonymously named after...
CDX Index shards for the Wayback Machine. The Wayback Machine works by looking up historic URLs based on a query. This is done by searching an index of all the web objects (pages, images, etc.) that have been archived over the years. This collection holds the index used for this purpose, which is broken up into 300 pieces so they fit into items more naturally and distribute the lookup load. Each of these 300 pieces is stored in at least 2 items, and then those are also stored on the backup...
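This index also backs the public CDX server API, which is the simplest way to see what a lookup returns; a minimal query sketch (the target URL and result limit are arbitrary examples):

    # Sketch: ask the public Wayback CDX API for a few captures of a URL.
    # JSON output: the first row is the field names, the rest are captures.
    import requests

    resp = requests.get(
        'https://web.archive.org/cdx/search/cdx',
        params={'url': 'archive.org', 'output': 'json', 'limit': 5},
    )
    rows = resp.json()
    header, captures = rows[0], rows[1:]
    for capture in captures:
        record = dict(zip(header, capture))
        print(record['timestamp'], record['statuscode'], record['original'])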
Crawl of outlinks from wikipedia.org started July, 2011. These files are currently not publicly accessible.
COM survey crawl data collected by Internet Archive in 2009-2010. This data is currently not publicly accessible.
Shallow crawl started 2013 that collects content 1 level deep, including embeds. Access to content is restricted. Please visit the Wayback Machine to explore archived web sites.
Shallow crawl started 2013 that collects content 1 level deep, including embeds. Access to content is restricted. Please visit the Wayback Machine to explore archived web sites.
This collection contains web crawls performed as the pre-inauguration crawl for part of the End of Term Web Archive, a collaborative project that aims to preserve the U.S. federal government web presence at each change of administration. Content includes publicly-accessible government websites hosted on .gov, .mil, and relevant non-.gov domains, as well as government social media materials. The web archiving was performed in the Fall and Winter of 2016 to capture websites prior to the January...
Topics: end of term, federal government, 2016, president, congress
Survey crawl of .net domains started December 2010.
Topic: webcrawl
This collection contains web crawls performed as the post-inauguration crawl for part of the End of Term Web Archive, a collaborative project that aims to preserve the U.S. federal government web presence at each change of administration. Content includes publicly-accessible government websites hosted on .gov, .mil, and relevant non-.gov domains, as well as government social media materials. The web archiving was performed in the Winter of 2016 and Spring of 2017 to capture websites...
Topics: end of term, federal government, 2016, president, congress
This collection contains web crawls performed on the US Federal Executive, Legislative & Judicial branches of government in 2012-2013.
Topics: end of term, US, Federal government, 2012, Obama
This data is currently not publicly accessible.
This collaborative project is an extension of the 2016 End of Term project, intended to document the federal government's web presence by archiving government websites and data. As part of this preservation effort, URLs supplied from partner institutions, as well as nominated by the public, will be crawled regularly to provide an on-going view of federal agencies' web and social media presence. Key partners on this effort are the Environmental Data & Governance...
Topics: government, data, federal, congress
Crawl of International News Sites with initial seedlist and crawler configuration from Sep 1, 2010.
Survey of .org domains. This data is currently not publicly accessible.
TEST COLLECTION: Crawl of .edu and .gov sites started in June 2010.
Topic: crawldata
Internet Archive crawldata from Webwide Crawl, captured by crawl427.us.archive.org:wide from Sat Oct 15 21:19:02 PDT 2016 to Sun Oct 16 02:25:38 PDT 2016.
Topic: crawldata
End of Term 2016 Web Archive government web crawls by project partner the University of North Texas.
Topics: end of term, federal government, 2016, president, congress, university of north texas
End of Term 2016 Web Archive government web crawls by project partner the Library of Congress.
Topics: end of term, federal government, 2016, president, congress, library of congress, web, data
Survey crawl of .net domains started October 2011.
Topics: webwidecrawl, net
End of term 2008 crawl data gathered by Internet Archive on behalf of the California Digital Library. This data is currently not publicly accessible.
Data crawled from YouTube.com in 2007 by Internet Archive. These files are not currently accessible.
Web wide crawl with initial seedlist and crawler configuration from September 2010.